A person's current emotional state strongly influences the entertainment content he or she wants to listen to or watch. An emotion-based content recommendation system not only gives users content that matches their current state of mind but also reduces the effort of managing music playlists and can help relieve stress by recommending appropriate content. A person's emotion can be determined from his or her facial expression, and this expression can be detected with a machine learning model; we have developed such a model using the Xception architecture. The application accesses the device camera and takes an image of the user's face, then connects to the ML Kit hosted on the cloud (Firebase), which analyzes the image and detects the user's mood. Based on that mood, it connects to the API of a music or movie application (e.g., Spotify, Netflix, Disney+ Hotstar) through which content is recommended. The application also asks the user about his or her taste in music and customizes the recommendations accordingly. The user is prompted at specific intervals to re-check the emotion in case he or she would like to change the content.
I. INTRODUCTION
A. Background/Context
Facial expressions are a primary way of conveying our emotions. Computer systems based on affective interaction may play an important part in the next generation of computer vision systems. Facial emotion can be used in areas such as security, entertainment, and human-machine interfaces (HMI). A person expresses emotion largely through the lips and eyes. This work describes the development of Emotion Based Content Recommendation, a computer application meant to minimize users' effort in managing large playlists and in deciding which movies to watch, particularly when they want content that matches their current mood. It can also help users who are stressed, tense, depressed, or sad by offering them a suitable song. The proposed model extracts the user's facial expressions and features to determine the user's current mood. Once the emotion is detected, a playlist of songs suited to the user's mood is presented, with the aim of giving music lovers better enjoyment while listening. The model covers the following moods: happy, sad, fearful, surprised, angry, and neutral. The system involves image processing and face detection. The input to the model is still images of the user, which are further processed to determine the user's mood. The system captures an image of the user with the webcam when the application starts; the captured image is saved and passed to the rendering phase. The user's mood may or may not be the same after some time, so an image is captured after every fixed interval and forwarded to the next phase.
B. Aim and Objective
The fundamental idea behind this project is to play songs according to the user's mood. Its goal is to bring emotion awareness to the user's favorite content. In current systems, users have to select songs manually, since randomly played songs may not match the user's mood; the user must first classify the songs into various emotions and then manually choose a particular emotion before playing them. An emotion-based music player avoids these problems: the music is played from the appropriate folders based on the detected emotion.
II. THEORETICAL DESCRIPTION
A. Theoretical Description
In this case, we build a web application so that the user can log in to an account and access the features. The user provides a camera capture or an existing photo and receives recommendations based on it. For the essential image processing we use OpenCV and a Haar cascade, which help extract facial characteristics such as the eyes and lips and perform feature-point recognition. These features are given to a pretrained Xception model that predicts the user's mood. The application is connected to the cloud, where lists of songs and other items are stored per mood, and it also accepts user feedback on the recommendations for future improvements.
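As a minimal sketch of this pipeline, the code below loads OpenCV's pretrained frontal-face Haar cascade, crops the first detected face, and passes it to a saved Keras model; the model file name, the emotion label order, and the single-channel input shape are assumptions for illustration, not the actual project artifacts:

import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "fearful", "happy", "sad", "surprised", "neutral"]  # assumed label order

# OpenCV ships a pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_xception.h5")  # hypothetical saved model file

def predict_emotion(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                      # use the first detected face
    face = cv2.resize(gray[y:y+h, x:x+w], (48, 48))
    face = face.astype("float32") / 255.0      # normalize pixel values to [0, 1]
    face = np.expand_dims(face, axis=(0, -1))  # shape (1, 48, 48, 1)
    # If the model expects three channels, repeat the last axis
    # (see the preprocessing sketch in the system design section).
    probs = model.predict(face)[0]
    return EMOTIONS[int(np.argmax(probs))]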
B. Resources Required
Hardware Requirements: a mobile phone, laptop, or desktop with a camera and an internet connection
Software Requirements: for desktop or laptop, Windows 7 or later; Python 3.7 or later; tkinter
III. ALGORITHM
The complete algorithm of the application is as follows. The user starts the application and logs in to his or her account. The application accesses the device camera and captures an image of the user's face. A Haar cascade detects and extracts the face region from the frame, and the pretrained Xception model predicts the user's emotion (happy, sad, fearful, surprised, angry, or neutral). Based on the detected emotion and the user's stated taste, the application queries the content service's API and displays a list of recommended songs or movies. After a fixed interval the image capture and emotion detection are repeated, and the recommendations are refreshed if the mood has changed. A sketch of this top-level loop is given below.
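The loop might look like the following sketch; predict_emotion() is the helper sketched earlier, the interval length is an assumed value, and fetch_recommendations() is a hypothetical stand-in for the streaming-service API call:

import time
import cv2

CHECK_INTERVAL = 300  # seconds between mood re-checks (assumed value)

def fetch_recommendations(mood):
    # Placeholder for the Spotify/Netflix/Hotstar API lookup keyed on mood.
    return [f"{mood} playlist item {i}" for i in range(1, 6)]

def run():
    camera = cv2.VideoCapture(0)          # default webcam
    last_mood = None
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            mood = predict_emotion(frame)
            if mood and mood != last_mood:
                last_mood = mood
                for item in fetch_recommendations(mood):
                    print(item)
            time.sleep(CHECK_INTERVAL)    # wait before re-checking the mood
    finally:
        camera.release()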
IV. SYSTEM DESIGN
A. Block Wise Design
Fig. 4.1 shows the flow chart of the application, which obtains the user's emotion and recommends content accordingly.
GUI of the System: displays the user's live feed (from which the emotion is retrieved) and the list of recommended content, and lets the user go to the respective web pages for the content.
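As a hedged illustration of this GUI block (tkinter is listed in the software requirements), the sketch below shows a minimal window with a placeholder for the live feed, a list of recommended items, and a button that opens content in the browser; all widget names, list items, and the URL are illustrative, not the actual application:

import tkinter as tk
import webbrowser

root = tk.Tk()
root.title("Emotion Based Content Recommendation")

video_label = tk.Label(root, text="live camera feed here")  # updated with webcam frames
video_label.pack()

recs = tk.Listbox(root, height=6)
for title in ("song 1", "song 2", "movie 1"):               # placeholder items
    recs.insert(tk.END, title)
recs.pack(fill=tk.X)

def open_selected():
    # In the real application each item would carry its own Spotify/Netflix URL.
    webbrowser.open("https://open.spotify.com")             # placeholder URL

tk.Button(root, text="Open content", command=open_selected).pack()
root.mainloop()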
Emotion Detection Model: a Haar cascade model extracts the face from the live video feed and passes it to an ML model built on the Xception architecture, which predicts the user's emotion. The Xception architecture consists of three main flows: the entry flow, the middle flow, and the exit flow. The input data is a 48x48 grayscale image that must be normalized (the pixel values lie between 0 and 255 and are scaled down to the range 0 to 1 by dividing by 255). The NumPy array of the image is then extended to three dimensions so that it resembles a 48x48x3 image, i.e., an RGB image; this is done with NumPy's expand_dims and repeat functions, which add a third dimension to the array and then repeat it three times, as sketched below.
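The normalization and channel-replication steps just described can be sketched in a few lines of NumPy (the function name is illustrative):

import numpy as np

def preprocess(face48):                    # face48: (48, 48) uint8 grayscale image
    x = face48.astype("float32") / 255.0   # scale pixel values from 0-255 down to 0-1
    x = np.expand_dims(x, axis=-1)         # add a channel axis: (48, 48, 1)
    x = np.repeat(x, 3, axis=-1)           # repeat it three times: (48, 48, 3)
    return x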
Before being given to the Xception model, the data is passed through an ImageDataGenerator, which randomly transforms the images: zooming, rotating by a random angle between 0 and 180 degrees, shifting horizontally or vertically, and flipping horizontally or vertically.
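A sketch of this augmentation step using Keras' ImageDataGenerator; the exact parameter values are not given in the text, so these are illustrative:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=180,      # random rotation between 0 and 180 degrees
    zoom_range=0.2,          # random zoom
    width_shift_range=0.1,   # random horizontal shift
    height_shift_range=0.1,  # random vertical shift
    horizontal_flip=True,    # random horizontal flip
    vertical_flip=True,      # random vertical flip
)

# During training the generator yields randomly transformed batches, e.g.:
# model.fit(datagen.flow(x_train, y_train, batch_size=64), epochs=...)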
The input to this flow is of size (48, 48, 1), i.e., a 48x48 grayscale image. There are two parts to the entry flow. The first part consists of three layers repeated twice (see Fig. 3.1), as sketched below.
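A minimal sketch of this first part under the standard Xception design, in which the repeated triple of layers is (Conv2D, BatchNormalization, ReLU); the filter counts (32, then 64) and strides are not stated in the text and follow the original Xception paper, so they are assumptions:

from tensorflow.keras import layers, Input, Model

inputs = Input(shape=(48, 48, 1))          # 48x48 grayscale input
x = inputs
for filters in (32, 64):                   # the three layers, repeated twice
    x = layers.Conv2D(filters, 3, strides=2 if filters == 32 else 1,
                      padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
stem = Model(inputs, x, name="xception_entry_stem")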
V. WORKING
The application works as follows: the GUI shows the user's live camera feed, a Haar cascade extracts the face from each captured frame, and the Xception-based model predicts the user's current emotion. Based on that emotion, a list of recommended songs and movies is fetched and displayed, and the user's feedback on the recommendations is recorded for future improvement. The emotion is re-checked after every fixed interval so that the recommendations follow the user's mood.
VI. RESULT
The following are the end results of the content recommendation application. The video block takes the user's live video feed; the Haar cascade model detects and extracts the face in the live video, which is then given to the pretrained model discussed above to obtain recommendations for the user.
VII. FUTURE SCOPE
The system currently recommends content based on the user's mood, i.e., facial expressions. It can be improved and extended with the following functionalities: improving the accuracy of emotion detection so that it matches the exact mood every time; recommending content based on the surroundings, such as the gym or outdoor activities; integrating it into wearables to predict mood from the heart pulse; determining the mood of physically and mentally challenged people; connecting users with random people according to their mood; a parental-control mode to track children's mood or stress level; and recommending content from social media such as YouTube, Reddit, and Instagram based on mood.
VIII. CONCLUSION
The developed application uses the user's camera to take pictures, captures the emotion using a well-performing architecture, and recommends music according to mood and preference. This reduces users' effort in creating and managing playlists and keeps them up to date with new content. By providing the most appropriate songs for the user's current emotion, and songs that relieve stress and sadness, it offers music listeners better enjoyment. It not only helps the user but also systematically categorizes the songs.